# ELECTRA Architecture
## Monoelectra Large
Author: cross-encoder · License: Apache-2.0 · Task: Text Embedding · Tags: Transformers, English · Downloads: 699 · Likes: 2

A text reranking model based on the ELECTRA architecture, used to sort retrieval results by relevance.
## Monoelectra Base
Author: cross-encoder · License: Apache-2.0 · Task: Text Embedding · Tags: Transformers, English · Downloads: 151 · Likes: 6

A text ranking cross-encoder based on the ELECTRA architecture, designed for reranking retrieval results.
## Monoelectra Base (webis)
Author: webis · License: Apache-2.0 · Task: Large Language Model · Downloads: 69 · Likes: 4

A cross-encoder based on the ELECTRA architecture from the lightning-ir project, designed for text ranking. The model improves passage re-ranking performance through distillation from large language models.
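The three Monoelectra entries above are pointwise rerankers: the model reads a query and a candidate passage together and emits a relevance score. Below is a minimal sketch of that pattern with the Hugging Face transformers API; the repo id and the single-logit classification head are assumptions, so check the individual model cards for the intended loading code (the webis checkpoint, for example, is distributed with the lightning-ir library).

```python
# Hedged sketch, not the official loading code: score (query, passage) pairs
# with a Monoelectra-style cross-encoder. The repo id and the single-logit
# relevance head are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "cross-encoder/monoelectra-base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

query = "how does ELECTRA pretraining work"
passages = [
    "ELECTRA trains a discriminator to detect tokens replaced by a small generator.",
    "The capital of France is Paris.",
]

# A cross-encoder encodes each (query, passage) pair jointly.
batch = tokenizer([query] * len(passages), passages,
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**batch).logits.squeeze(-1)  # one relevance score per pair

for passage, score in zip(passages, scores.tolist()):
    print(f"{score:+.2f}  {passage}")
```

Retrieval results are then sorted by this score, highest first.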
## Electra Squad Training
Author: mlxen · License: Apache-2.0 · Task: Question Answering System · Tags: Transformers · Downloads: 20 · Likes: 0

An ELECTRA-small model fine-tuned on the SQuAD dataset for question answering.
## Electra Contrastdata Squad
Author: mlxen · License: Apache-2.0 · Task: Question Answering System · Tags: Transformers · Downloads: 19 · Likes: 0

A fine-tuned version of the ELECTRA-small discriminator trained on the SQuAD dataset, suitable for question answering.
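Both mlxen checkpoints are extractive QA models, so they plug into the standard transformers question-answering pipeline. A hedged sketch follows; the repo id is inferred from the entry name and author rather than taken from a model card.

```python
from transformers import pipeline

# Assumed repo id; any ELECTRA checkpoint with a QA head is used the same way.
qa = pipeline("question-answering", model="mlxen/electra_squad_training")

answer = qa(
    question="What does the ELECTRA discriminator predict?",
    context=(
        "ELECTRA replaces masked language modeling with replaced token detection: "
        "a small generator corrupts the input and a discriminator predicts, for each "
        "token, whether it was replaced."
    ),
)
print(answer["answer"], answer["score"])
```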
## Electra Base Squad2
Author: bhadresh-savani · Task: Question Answering System · Downloads: 102 · Likes: 0

An English extractive question answering model based on the ELECTRA-base architecture and trained on the SQuAD 2.0 dataset.
## Electra Large Discriminator Squad2 512
Author: ahotrod · Task: Question Answering System · Tags: Transformers · Downloads: 8,925 · Likes: 6

A large ELECTRA discriminator model fine-tuned for question answering on the SQuAD 2.0 dataset, able to handle both answerable and unanswerable questions.
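Because SQuAD 2.0 contains unanswerable questions, models such as the two entries above are expected to abstain when the passage holds no answer. In the transformers pipeline this is exposed through the handle_impossible_answer flag. A short sketch, with the repo id inferred from the entry's name and author:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ahotrod/electra_large_discriminator_squad2_512",  # inferred repo id
)

context = "ELECTRA was proposed by Clark et al. as a sample-efficient pretraining method."
print(qa(question="Who proposed ELECTRA?", context=context,
         handle_impossible_answer=True))
# For a question the passage cannot answer, the pipeline may return an empty
# answer string instead of forcing a span:
print(qa(question="When was BERT released?", context=context,
         handle_impossible_answer=True))
```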
## Tamillion
Author: monsoon-nlp · Task: Large Language Model · Tags: Transformers, Other · Downloads: 58 · Likes: 2

A Tamil pretrained model based on the ELECTRA framework; the second version was trained on TPUs with an expanded corpus.
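Tamillion is a base encoder rather than a task-specific model, so a typical use is extracting contextual embeddings (or fine-tuning) for downstream Tamil NLP tasks. A hedged sketch, with the repo id inferred from the entry:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "monsoon-nlp/tamillion"  # inferred repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

text = "தமிழ் ஒரு செம்மொழி."  # "Tamil is a classical language."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
print(hidden.shape)
```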
## Transformers Ud Japanese Electra Base Ginza 510
Author: megagonlabs · License: MIT · Task: Sequence Labeling · Tags: Transformers, Japanese · Downloads: 7,757 · Likes: 2

A Japanese pretrained model based on the ELECTRA architecture, pretrained on approximately 200 million Japanese sentences from the mC4 dataset and fine-tuned on the UD_Japanese_BCCWJ corpus.
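The model name and the megagonlabs org suggest this checkpoint is the transformer behind the GiNZA ELECTRA pipeline for Japanese tokenization, tagging, and dependency parsing; that association is an assumption here. A sketch assuming the ja_ginza_electra spaCy package (installed via `pip install ginza ja-ginza-electra`) wraps it:

```python
import spacy

# Assumes the ja_ginza_electra package, which bundles an ELECTRA checkpoint.
nlp = spacy.load("ja_ginza_electra")
doc = nlp("銀座でランチをご一緒しましょう。")
for token in doc:
    print(token.i, token.text, token.pos_, token.dep_, token.head.text)
```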
## Qnli Electra Base
Author: cross-encoder · License: Apache-2.0 · Task: Question Answering System · Tags: Transformers, English · Downloads: 6,172 · Likes: 3

A cross-encoder based on the ELECTRA architecture, trained for question natural language inference (QNLI): it scores whether a given paragraph answers a given question.
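The cross-encoder organization publishes its models for use with the sentence-transformers CrossEncoder class, so checking whether a paragraph answers a question reduces to scoring the pair. A short sketch, with the repo id inferred from the entry:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/qnli-electra-base")  # inferred repo id
pairs = [
    ("Where was ELECTRA introduced?", "ELECTRA was introduced in an ICLR 2020 paper."),
    ("Where was ELECTRA introduced?", "Paris is the capital of France."),
]
# One score per (question, paragraph) pair; higher means the paragraph
# is more likely to contain the answer.
scores = model.predict(pairs)
print(scores)
```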
## Ms Marco Electra Base
Author: cross-encoder · License: Apache-2.0 · Task: Text Embedding · Tags: Transformers, English · Downloads: 118.93k · Likes: 5

A cross-encoder built on the ELECTRA-base architecture and optimized for the MS MARCO passage ranking task; it scores query-passage relevance for information retrieval.
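The same CrossEncoder interface covers the retrieve-then-rerank workflow this MS MARCO model targets: a first-stage retriever proposes candidate passages and the cross-encoder re-scores them against the query. A sketch, with the repo id inferred from the entry:

```python
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-electra-base", max_length=512)

query = "what is replaced token detection"
candidates = [
    "Replaced token detection trains a discriminator to spot tokens swapped in by a generator.",
    "MS MARCO is a large-scale passage ranking dataset built from Bing queries.",
    "Cross-encoders jointly encode a query and a passage to produce a relevance score.",
]
scores = reranker.predict([(query, passage) for passage in candidates])
for score, passage in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {passage}")
```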
## Electra Large Synqa
Author: mbartolo · License: Apache-2.0 · Task: Question Answering System · Tags: Transformers, English · Downloads: 24 · Likes: 3

A question answering model based on the ELECTRA-Large architecture, trained in two stages: first on synthetic adversarial data, then fine-tuned on the SQuAD and AdversarialQA datasets.
## Bert Small Japanese Fin
Author: izumi-lab · Task: Large Language Model · Tags: Transformers, Japanese · Downloads: 4,446 · Likes: 2

A BERT model pre-trained on Japanese text, specifically optimized for the financial domain.
## Koelectra Base Generator
Author: monologg · License: Apache-2.0 · Task: Large Language Model · Tags: Transformers, Korean · Downloads: 31 · Likes: 0

KoELECTRA is a Korean pretrained language model based on the ELECTRA architecture, developed by monologg. This model is the generator component, focusing on representation learning for Korean text.
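In the ELECTRA setup the generator is a small masked language model whose job is to propose replacement tokens for the discriminator, so this checkpoint can be exercised with the fill-mask pipeline. A hedged sketch, with the repo id inferred from the entry:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="monologg/koelectra-base-generator")  # inferred repo id
masked = f"나는 {fill.tokenizer.mask_token} 밥을 먹었다."  # "I ate [MASK] (for a meal)."
for prediction in fill(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```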